Recently, there has been increasing interest in synthesizing data to improve downstream text-to-SQL tasks. In this paper, we first examined the existing synthesized datasets and discovered that state-of-the-art text-to-SQL algorithms did not further improve on popular benchmarks when trained with augmented synthetic data. We observed two shortcomings: illogical synthetic SQL queries from independent column sampling, and arbitrary table joins. To address these issues, we propose a novel synthesis framework that incorporates key relationships from the schema, imposes strong typing, and conducts schema-distance-weighted column sampling. We also adopt an intermediate representation (IR) for the SQL-to-text task to further improve the quality of the generated natural language questions. When existing powerful semantic parsers are pre-finetuned on our high-quality synthesized data, our experiments show that these models achieve significant accuracy boosts on popular benchmarks, including new state-of-the-art performance on Spider.
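To make the schema-distance-weighted sampling idea concrete, here is a minimal sketch, assuming a graph whose nodes are columns, with edges between columns of the same table and across foreign keys; the names and the exponential weighting are illustrative, not the paper's exact procedure:

```python
import math
import random
import networkx as nx

def sample_columns(schema_graph, anchor, k, temperature=1.0):
    # Distance from the anchor column to every reachable column.
    dist = nx.single_source_shortest_path_length(schema_graph, anchor)
    candidates = [c for c in dist if c != anchor]
    # Exponentially down-weight columns far from the anchor, discouraging
    # the arbitrary joins that plague naive independent column sampling.
    weights = [math.exp(-dist[c] / temperature) for c in candidates]
    return random.choices(candidates, weights=weights, k=k)
```

Columns drawn this way tend to come from the anchor's table or from tables one foreign-key hop away, so the synthesized queries stay joinable.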
This paper investigates a family of methods for defending against adversarial attacks that owe part of their success to creating noisy, discontinuous, or otherwise uninformative loss landscapes that are difficult for an adversary to navigate. A common, but not universal, way to achieve this effect is to use randomized neural networks. We show that this is a form of gradient obfuscation, and propose a general extension of gradient-based adversaries based on the Weierstrass transform, which smooths the surface of the loss function and provides more reliable gradient estimates. We further show that the same principle can strengthen gradient-free adversaries. We demonstrate the efficacy of the proposed smoothing method against both randomized and non-randomized adversarial defenses that exhibit robustness due to this type of obfuscation. Furthermore, we analyze how it interacts with Expectation over Transformation, a popular gradient-sampling method currently used to attack randomized defenses.
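The Weierstrass transform of the loss is its convolution with a Gaussian kernel, so its gradient can be estimated by averaging input gradients over Gaussian perturbations. A minimal PyTorch sketch of this idea, with illustrative names and hyperparameters rather than the paper's exact attack:

```python
import torch

def smoothed_grad(loss_fn, x, sigma=0.05, n_samples=32):
    # Monte Carlo estimate of the gradient of the Gaussian-smoothed
    # (Weierstrass-transformed) loss: average grad(loss) at noisy inputs.
    grad = torch.zeros_like(x)
    for _ in range(n_samples):
        x_noisy = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)
        loss_fn(x_noisy).backward()
        grad += x_noisy.grad
    return grad / n_samples
```

An attacker would plug this estimate into a standard iterative method such as PGD in place of the raw, noisy gradient.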
Any explicit functional representation $f$ of a density is hampered by two main obstacles when we wish to use it as a generative model: designing $f$ so that sampling is fast, and estimating $Z = \int f$ so that $Z^{-1} f$ integrates to 1; this becomes increasingly complicated as $f$ itself becomes complex. In this paper, we show that when one-dimensional conditional densities are modeled with a neural network, $Z$ can be computed exactly by letting the network represent the cumulative distribution function of the target density and applying the fundamental theorem of calculus. We also derive a fast algorithm for sampling from the resulting representation via the inverse transform method. By extending these principles to higher dimensions, we introduce the \textbf{Neural Inverse Transform Sampler (NITS)}, a novel deep learning framework for modeling and sampling from general, multidimensional, compactly supported probability densities. NITS is a highly expressive density estimator with end-to-end differentiability, fast sampling, and exact and cheap likelihood evaluation. We demonstrate the applicability of NITS by applying it to realistic, high-dimensional density estimation tasks: likelihood-based generative modeling on the CIFAR-10 dataset, and density estimation on the UCI suite of benchmark datasets, where NITS produces compelling results rivaling or surpassing the state of the art.
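A one-dimensional toy sketch of the two ingredients: the density is the autograd derivative of a monotone CDF (so it integrates to 1 by construction), and sampling inverts the CDF by bisection. The closed-form sigmoid below merely stands in for a learned monotone network; everything here is illustrative:

```python
import torch

def cdf(x, a=6.0):
    # Stand-in for a monotone CDF network on [0, 1], normalized so that
    # cdf(0) = 0 and cdf(1) = 1, hence Z = 1 exactly by construction.
    s0 = torch.sigmoid(torch.tensor(-a / 2))
    s1 = torch.sigmoid(torch.tensor(a / 2))
    return (torch.sigmoid(a * (x - 0.5)) - s0) / (s1 - s0)

def density(x):
    # Fundamental theorem of calculus: the density is d/dx of the CDF.
    x = x.detach().requires_grad_(True)
    return torch.autograd.grad(cdf(x).sum(), x)[0]

def sample(n, iters=50):
    # Inverse transform sampling: solve cdf(x) = u for u ~ Uniform(0, 1).
    u = torch.rand(n)
    lo, hi = torch.zeros(n), torch.ones(n)
    for _ in range(iters):
        mid = (lo + hi) / 2
        below = cdf(mid) < u
        lo = torch.where(below, mid, lo)
        hi = torch.where(below, hi, mid)
    return (lo + hi) / 2
```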
Language models demonstrate both quantitative improvements and new qualitative capabilities with increasing scale. Despite their potentially transformative impact, these new capabilities are as yet poorly characterized. In order to inform future research, prepare for disruptive new model capabilities, and ameliorate socially harmful effects, it is vital that we understand the present and near-future capabilities and limitations of language models. To address this challenge, we introduce the Beyond the Imitation Game benchmark (BIG-bench). BIG-bench currently consists of 204 tasks, contributed by 442 authors across 132 institutions. Task topics are diverse, drawing problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and beyond. BIG-bench focuses on tasks that are believed to be beyond the capabilities of current language models. We evaluate the behavior of OpenAI's GPT models, Google-internal dense transformer architectures, and Switch-style sparse transformers on BIG-bench, across model sizes spanning millions to hundreds of billions of parameters. In addition, a team of human expert raters performed all tasks to provide a strong baseline. Findings include: model performance and calibration both improve with scale, but are poor in absolute terms (and when compared with rater performance); performance is remarkably similar across model classes, though with benefits from sparsity; tasks that improve gradually and predictably commonly involve a large knowledge or memorization component, whereas tasks that exhibit "breakthrough" behavior at a critical scale often involve multiple steps or components, or brittle metrics; and social bias typically increases with scale in settings with ambiguous context, but this can be improved with prompting.
The accelerated use of digital cameras has raised increasing concerns about privacy and security, particularly in applications such as action recognition. In this paper, we propose an optimization framework that provides robust visual privacy protection along the human action recognition pipeline. Our framework parameterizes the camera lens to successfully degrade the quality of the videos, inhibiting privacy attributes and protecting against adversarial attacks while maintaining the features relevant for activity recognition. We validate our approach through extensive simulations and hardware experiments.
We analyze the problem of simultaneous support recovery and estimation of the coefficient vector in linear models with independent and identically distributed normal errors. To estimate the coefficients, we apply a penalized least squares estimator based on the non-linear penalties of stochastic gates (STG) [YLNK20]. Considering Gaussian design matrices, we show that under reasonable conditions on the dimension and sparsity of $\beta^*$, the STG-based estimator converges to the true data-generating coefficient vector and also detects its support set with high probability. We propose a new projection-based algorithm for the linear model setting to improve upon the existing STG estimator, which was originally designed for general non-linear models. Our new procedure outperforms many classical estimators for support recovery in synthetic data analyses.
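As a rough illustration of the STG penalty in the linear setting (following the construction in [YLNK20]; the variable names and hyperparameters below are illustrative), each coefficient is multiplied by a clipped Gaussian gate, and the expected number of open gates serves as a smooth surrogate for the $\ell_0$ penalty:

```python
import torch
from torch.distributions import Normal

def stg_loss(X, y, beta, mu, sigma=0.5, lam=0.1):
    # Stochastic gates: z_j = clip(mu_j + eps_j, 0, 1), eps_j ~ N(0, sigma^2).
    z = torch.clamp(mu + sigma * torch.randn_like(mu), 0.0, 1.0)
    # Least squares fit on the gated coefficients beta * z.
    fit = (y - X @ (beta * z)).pow(2).mean()
    # P(z_j > 0) = Phi(mu_j / sigma): expected open-gate count, a smooth
    # surrogate for the l0 norm of the coefficient vector.
    reg = Normal(0.0, 1.0).cdf(mu / sigma).sum()
    return fit + lam * reg
```

Optimizing `beta` and `mu` jointly by stochastic gradient descent, the recovered support is the set of coordinates whose gates stay open, e.g. $\{j : \mu_j > 0\}$.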
We introduce a novel framework to track multiple objects in overhead camera videos for airport checkpoint security scenarios where targets correspond to passengers and their baggage items. We propose a Self-Supervised Learning (SSL) technique to provide the model with information about instance segmentation uncertainty from overhead images. Our SSL approach improves object detection by employing a test-time data augmentation and a regression-based, rotation-invariant pseudo-label refinement technique. Our pseudo-label generation method provides multiple geometrically-transformed images as inputs to a Convolutional Neural Network (CNN), regresses the augmented detections generated by the network to reduce localization errors, and then clusters them using the mean-shift algorithm. The self-supervised detector model is used in a single-camera tracking algorithm to generate temporal identifiers for the targets. Our method also incorporates a multi-view trajectory association mechanism to maintain consistent temporal identifiers as passengers travel across camera views. An evaluation of detection, tracking, and association performance on videos obtained from multiple overhead cameras in a realistic airport checkpoint environment demonstrates the effectiveness of the proposed approach. Our results show that self-supervision improves object detection accuracy by up to $42\%$ without increasing the inference time of the model. Our multi-camera association method achieves up to $89\%$ multi-object tracking accuracy with an average computation time of less than $15$ ms.
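A minimal sketch of the rotation-based test-time augmentation and mean-shift merging described above, reduced to box centers for brevity; `detect` is an assumed callable returning (row, col) centers, and the bandwidth is an illustrative value:

```python
import numpy as np
from sklearn.cluster import MeanShift

def undo_rot90(point, k, h, w):
    # Map a (row, col) point found in np.rot90(image, k) back to the
    # original (h, w) frame by inverting one quarter turn at a time.
    i, j = point
    for m in range(k, 0, -1):
        pre_w = w if m % 2 == 1 else h  # width of the image before step m
        i, j = j, pre_w - 1 - i         # inverse of one CCW quarter turn
    return i, j

def refine_detections(detect, image, bandwidth=20.0):
    # Run the detector on each lossless 90-degree rotation, map the box
    # centers back to the original frame, and merge duplicates by clustering
    # them with mean-shift (the real pipeline also regresses full boxes).
    h, w = image.shape[:2]
    centers = np.asarray([undo_rot90(p, k, h, w)
                          for k in range(4)
                          for p in detect(np.rot90(image, k))], dtype=float)
    labels = MeanShift(bandwidth=bandwidth).fit_predict(centers)
    return np.array([centers[labels == c].mean(axis=0)
                     for c in np.unique(labels)])
```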
Frequent, cost-free satellite imagery is in growing demand in the research community. Satellite constellations such as Landsat 8 and Sentinel-2 provide massive amounts of valuable data daily. However, the discrepancy between these satellites' sensor characteristics means that a segmentation model trained on one dataset cannot sensibly be applied to the other, which is why domain adaptation techniques have recently become an active research area in remote sensing. In this paper, an experiment in domain adaptation through style transfer is conducted using the HRSemI2I model to narrow the sensor discrepancy between Landsat 8 and Sentinel-2. This paper's main contribution is an analysis of the expediency of that approach, comparing segmentation results on domain-adapted images with those on unadapted ones. The HRSemI2I model, adjusted to work with 6-band imagery, shows significant intersection-over-union performance improvement in both mean and per-class metrics. A second contribution is a set of schemes for generalizing between two label schemes, NALCMS 2015 and CORINE: the first standardizes through higher-level land cover classes, and the second harmonizes the classes through validation in the field.
When a human communicates with a machine in natural language on the web, how can the machine understand the human's intention and the semantic context of what they say? This is an important AI task, as it enables the machine to construct a sensible answer or perform a useful action for the human. Meaning is represented at the sentence level, the identification of which is known as intent detection, and at the word level, a labelling task called slot filling. This dual-level joint task requires innovative thinking about natural language and deep learning network design, and as a result, many approaches and models have been proposed and applied. This tutorial will discuss how the joint task is set up and introduce Spoken Language Understanding/Natural Language Understanding (SLU/NLU) with deep learning techniques. We will cover the datasets, experiments, and metrics used in the field. We will describe how the machine uses the latest NLP and deep learning techniques to address the joint task, including recurrent and attention-based Transformer networks and pre-trained models (e.g. BERT). We will then look in detail at a network that allows the two levels of the task, intent classification and slot filling, to interact explicitly to boost performance. We will give a code demonstration of a Python notebook for this model, and attendees will have the opportunity to follow the coding demos for this joint NLU task to further their understanding.
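As a taste of the kind of model the demo covers, here is a minimal sketch of a BERT-based joint network: the pooled sentence vector feeds an intent classifier while each token vector feeds a slot tagger. The head sizes and names are illustrative, and this plain two-head design omits the explicit intent-slot interaction the tutorial examines in detail:

```python
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class JointNLU(nn.Module):
    def __init__(self, n_intents=7, n_slots=20, name="bert-base-uncased"):
        super().__init__()
        self.bert = BertModel.from_pretrained(name)
        hidden = self.bert.config.hidden_size
        self.intent_head = nn.Linear(hidden, n_intents)  # sentence level
        self.slot_head = nn.Linear(hidden, n_slots)      # word level

    def forward(self, input_ids, attention_mask):
        out = self.bert(input_ids=input_ids, attention_mask=attention_mask)
        intent_logits = self.intent_head(out.pooler_output)
        slot_logits = self.slot_head(out.last_hidden_state)
        return intent_logits, slot_logits

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
batch = tokenizer(["book a flight to boston"], return_tensors="pt")
model = JointNLU()
intent_logits, slot_logits = model(batch["input_ids"], batch["attention_mask"])
# Training would sum a cross-entropy loss over intents and one over slot tags.
```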
Datacenter operators ensure fair and regular server maintenance by using automated processes to schedule maintenance jobs to complete within a strict time budget. Automating this scheduling problem is challenging because maintenance job duration varies based on both job type and hardware. While it is tempting to use prior machine learning techniques for predicting job duration, we find that the structure of the maintenance job scheduling problem creates a unique challenge. In particular, we show that prior machine learning methods that produce the lowest error predictions do not produce the best scheduling outcomes due to asymmetric costs. Specifically, underpredicting maintenance job duration results in more servers being taken offline and longer server downtime than overpredicting it does. The system cost of underprediction is much larger than that of overprediction. We present Acela, a machine learning system for predicting maintenance job duration, which uses quantile regression to bias duration predictions toward overprediction. We integrate Acela into a maintenance job scheduler and evaluate it on datasets from large-scale, production datacenters. Compared to machine learning based predictors from prior work, Acela reduces the number of servers that are taken offline by 1.87-4.28X, and reduces the server offline time by 1.40-2.80X.
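The biasing mechanism Acela relies on is standard quantile regression; a minimal sketch of the pinball loss, with an illustrative quantile (the system's actual quantile choice may differ):

```python
import torch

def pinball_loss(pred, target, q=0.9):
    # Quantile (pinball) loss: with q > 0.5, underprediction (target > pred)
    # costs q/(1-q) times more than overprediction, so a model trained with
    # it learns the q-th quantile and tends to overpredict job duration.
    diff = target - pred
    return torch.mean(torch.maximum(q * diff, (q - 1) * diff))
```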